Current Issue: October–December 2016, Issue Number 4, 5 Articles
Structural load types, on the one hand, and structural capacity to withstand these loads, on the other, are of a probabilistic nature, as they cannot be calculated and presented in a fully deterministic way. As such, the past few decades have witnessed the development of numerous probabilistic approaches to the analysis and design of structures. Among the conventional methods used to assess structural reliability, the Monte Carlo sampling method has proved very convenient and efficient. However, it suffers from certain disadvantages, the biggest being the very large number of samples required to handle small probabilities, which leads to a high computational cost. In this paper, a simple algorithm is proposed to estimate low failure probabilities using a small number of samples in conjunction with the Monte Carlo method. This revised approach is then presented in a step-by-step flowchart for easy programming and implementation.
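The plain Monte Carlo baseline that the abstract refers to can be sketched as follows. The limit state g = R − S (resistance minus load) and the normal distribution parameters are illustrative assumptions, not taken from the paper; they are chosen so the exact failure probability is available for comparison.

```python
import math
import numpy as np

rng = np.random.default_rng(seed=0)

def mc_failure_probability(n_samples):
    """Crude Monte Carlo estimate of P(failure) = P(R - S < 0)."""
    R = rng.normal(loc=10.0, scale=1.5, size=n_samples)  # resistance (assumed)
    S = rng.normal(loc=6.0, scale=1.0, size=n_samples)   # load (assumed)
    # a sample fails when the limit state g = R - S is negative
    return np.count_nonzero(R - S < 0) / n_samples

# exact value for this toy case: R - S ~ N(4, sqrt(1.5^2 + 1^2)),
# so P(R - S < 0) = Phi(-4 / sigma) = 0.5 * erfc(4 / (sigma * sqrt(2)))
sigma = math.sqrt(1.5**2 + 1.0**2)
p_exact = 0.5 * math.erfc(4.0 / (sigma * math.sqrt(2)))
p_mc = mc_failure_probability(1_000_000)
```

The sketch illustrates the cost issue the abstract raises: the standard error of the estimate scales as sqrt(p(1−p)/N), so resolving a failure probability of 10⁻⁶ to useful accuracy requires on the order of 10⁸ samples or more.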
We present Bayes estimators, highest posterior density (HPD) intervals, and maximum likelihood estimators (MLEs) for the Maxwell failure distribution based on Type II censored data, i.e. using the first r lifetimes from a group of n components under test. Reliability/hazard function estimates, Bayes predictive distributions, and highest posterior density prediction intervals for a future observation are also considered. Two data examples and a Monte Carlo simulation study are used to illustrate the results and to compare the performances of the different methods.
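The MLE part of this setup can be sketched numerically. Under Type II censoring, only the first r order statistics are observed, and the remaining n − r units contribute a survival term at the r-th failure time. The sample below is simulated and the true scale of 2.0 is an assumption for illustration; this is not the paper's code.

```python
import numpy as np
from scipy.stats import maxwell
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n, r = 30, 20
true_scale = 2.0  # assumed value used only to simulate data
lifetimes = np.sort(maxwell.rvs(scale=true_scale, size=n, random_state=rng))
observed = lifetimes[:r]  # Type II censoring: keep the first r order statistics

def neg_log_lik(scale):
    # density contribution of the r observed failures
    ll = maxwell.logpdf(observed, scale=scale).sum()
    # survival contribution of the n - r units still alive at x_(r)
    ll += (n - r) * maxwell.logsf(observed[-1], scale=scale)
    return -ll

res = minimize_scalar(neg_log_lik, bounds=(0.1, 10.0), method="bounded")
scale_mle = res.x  # should land near the assumed true scale
```

The same censored log-likelihood is what a Bayesian treatment would combine with a prior on the scale parameter to obtain the posterior and HPD intervals the abstract mentions.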
In the investigation of disease dynamics, the effect of covariates on the hazard function is a major topic. Some recent smoothed estimation methods, both frequentist and Bayesian, have been proposed based on the relationship between penalized splines and mixed-models theory. These approaches are also motivated by the possibility of using automatic procedures for determining the optimal amount of smoothing. However, the estimation algorithms involve an analytically intractable hazard function and thus require ad hoc software routines. We propose a more user-friendly alternative, consisting in regularized estimation of piecewise exponential (PE) models by Bayesian P-splines. A further facilitation is that widespread Bayesian software, such as WinBUGS, can be used. The aim is to assess the robustness of this approach with respect to different prior functions and penalties. A large dataset from breast cancer patients, for which results from validated clinical studies are available, is used as a benchmark to evaluate the reliability of the estimates. A second dataset from a small case series of sarcoma patients is used to evaluate the performance of the PE model as a tool for exploratory analysis. Concerning the breast cancer data, the estimates are robust with respect to priors and penalties, and consistent with clinical knowledge. Concerning the soft tissue sarcoma data, the estimates of the hazard function are sensitive to the prior for the smoothing parameter, whereas the estimates of the regression coefficients are robust. In conclusion, Gibbs sampling proves to be an efficient computational strategy. The sensitivity to the priors concerns only the estimates of the hazard function, and seems more likely to occur when small case series are investigated, calling for tailored solutions.
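The core of the piecewise exponential idea can be shown in a few lines: with the hazard assumed constant on each time interval, its maximum likelihood estimate in interval j is simply events_j / exposure_j. The survival times, censoring indicators, and cut points below are illustrative assumptions; the paper's actual contribution (Bayesian P-spline regularization across intervals) is not reproduced here.

```python
import numpy as np

# toy survival data (assumed): follow-up times and event indicators
times = np.array([2.0, 3.5, 4.0, 6.0, 7.5, 9.0, 9.5, 12.0])
events = np.array([1, 1, 0, 1, 1, 0, 1, 1])   # 1 = event, 0 = censored
cuts = np.array([0.0, 5.0, 10.0, 15.0])        # interval boundaries (assumed)

hazard = np.empty(len(cuts) - 1)
for j in range(len(cuts) - 1):
    lo, hi = cuts[j], cuts[j + 1]
    # total person-time each subject spends inside interval (lo, hi]
    exposure = np.clip(np.minimum(times, hi) - lo, 0.0, None).sum()
    # number of events that occur inside interval (lo, hi]
    d = np.sum(events[(times > lo) & (times <= hi)])
    hazard[j] = d / exposure   # piecewise-constant hazard estimate
```

These raw interval estimates are jumpy when intervals contain few events; the P-spline penalty the abstract describes smooths neighboring hazard values toward each other.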
The output performance of a manufacturing system has a direct impact on mechanical product quality. To guarantee product quality and control production cost, many firms investigate the reliability of manufacturing systems with small sample data in order to evaluate whether the system is capable. Existing reliability methods depend on a known probability distribution or vast test data. However, the population performance of a complex system becomes uncertain as processing time elapses; that is, its probability distribution becomes unknown, and the existing methods are then ineffective. This paper proposes a novel evaluation method based on poor information to assess the reliability of the running state of a manufacturing system under small sample sizes with a known or unknown probability distribution. Using the grey bootstrap method, the maximum entropy principle, and a Poisson process, the experimental investigation of reliability evaluation for the running state of the manufacturing system shows that, at the confidence level P = 0.95, the running state is reliable if the reliability degree of achieving running quality is r > 0.65, the intersection area between the inspection data and the intrinsic data is A(T) > 0.3, and the variation probability of the inspection data is Pn(T) ≤ 0.7; otherwise, it is not reliable. A sensitivity analysis regarding sample size shows that sample size has no effect on the evaluation results obtained by the method. The proposed evaluation method provides a scientific basis for judging the running state of a manufacturing system reasonably, efficiently, and economically.
Chains are typically used for tension load transfer. They are very flexible and allow easy length adjustment by hooking at the links. Steel is the traditional material for chains. Recently, synthetic link chains made from ultra-strong polyethylene fibers, branded as Dyneema®, have become commercially available. These chains offer a greatly improved strength-to-weight ratio. So far, one type of such chain is available, with a Working Load Limit of 100 kN. Fifty such chains, each containing 6 links, were tested to fracture. The strength of each chain and the location of the failed link were documented during testing for later interpretation. Weibull statistics were applied in order to extrapolate towards the allowable load for very low failure risks (high reliability). Two approaches were used. One extrapolation was based on all results; the other was applied after recognizing that the end links failed under a slight negative influence from the connection to the testing equipment. Thus, in fact, two populations are mixed: chains with failing end links and chains with failing central links. Considering only the population without the failing end links is therefore more representative of pure chain behavior without clamping effects. The results from this latter consideration showed a higher Weibull exponent, and thus a more realistic extrapolation behavior. Both methods indicate that the reliability at the working load limit of 100 kN is very good.
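The Weibull extrapolation step can be sketched as follows: fit a two-parameter Weibull distribution to the strength data and read off the load at which the predicted failure probability is very small. The 50 strength values here are simulated with an assumed shape of 20 and scale of 220 kN, chosen only so the Working Load Limit of 100 kN sits far below the strength scatter; they are not the paper's measurements.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(2)
# simulated chain strengths in kN (assumed parameters, for illustration only)
strengths = weibull_min.rvs(c=20.0, scale=220.0, size=50, random_state=rng)

# fit a two-parameter Weibull by fixing the location at zero
shape, loc, scale = weibull_min.fit(strengths, floc=0.0)

# load at which the predicted failure probability is 1e-4 (reliability 0.9999)
allowable = weibull_min.ppf(1e-4, shape, scale=scale)
```

The shape parameter (the Weibull exponent in the abstract) governs how fast the low-probability tail falls off: a higher exponent means less scatter and a less pessimistic extrapolation, which is why removing the clamping-affected end-link failures raised the allowable load estimate.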